The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about common practice, as well as the bottlenecks faced by the community, in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for method development. 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on either multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
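For illustration, the most commonly reported practices above (patch-based training, k-fold cross-validation on the training set, and ensembling of the fold models) can be sketched roughly as follows; the data, patch size, and per-fold "training" step are placeholders and are not taken from any surveyed solution.

```python
# A minimal sketch, assuming dummy data and a stub model: patch-based training,
# 5-fold cross-validation, and ensembling by averaging the fold models' outputs.
import numpy as np
from sklearn.model_selection import KFold

def extract_patches(image, patch_size=64, stride=64):
    """Cut a large 2D sample into square patches so it fits in memory."""
    patches = []
    h, w = image.shape
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)

# Dummy dataset: 20 large images, each too big to feed to a network at once.
images = [np.random.rand(256, 256) for _ in range(20)]
patches = np.concatenate([extract_patches(img) for img in images])

fold_models = []
for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(patches):
    # Placeholder for per-fold training; a real pipeline would fit a network here.
    fold_models.append({"mean": patches[train_idx].mean()})

def ensemble_predict(patch):
    # Ensemble of identical models: average the per-fold predictions.
    return np.mean([m["mean"] for m in fold_models])

print(ensemble_predict(patches[0]))
```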
Harmonic functions are abundant in nature, appearing in limiting cases of Maxwell's and the Navier-Stokes equations, as well as the heat and wave equations. Consequently, harmonic functions have many applications, ranging from industrial process optimisation to robotic path planning and the calculation of first exit times of random walks. Despite their ubiquity and relevance, there have been few attempts to develop effective means of representing harmonic functions in the context of machine learning architectures, either in machine learning on classical computers, or in the nascent field of quantum machine learning. Architectures which impose or encourage an inductive bias towards harmonic functions would facilitate data-driven modelling and the solution of inverse problems in a range of applications. For classical neural networks, it has already been established how leveraging inductive biases can in general lead to improved performance of learning algorithms. The introduction of such inductive biases within a quantum machine learning setting, by contrast, is still in its nascent stages. In this work, we derive exactly-harmonic (conventional- and quantum-) neural networks in two dimensions for simply-connected domains by leveraging the characteristics of holomorphic complex functions. We then demonstrate how these can be approximately extended to multiply-connected two-dimensional domains using techniques inspired by domain decomposition in physics-informed neural networks. We further provide architectures and training protocols to effectively impose approximately harmonic constraints in three dimensions and higher, and as a corollary we report divergence-free network architectures in arbitrary dimensions. Our approaches are demonstrated with applications to heat transfer, electrostatics and robot navigation, with comparisons to physics-informed neural networks included.
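As a rough illustration of the two-dimensional idea: the real part of any holomorphic function of z = x + iy is harmonic, so a model built exclusively from holomorphic operations on z is exactly harmonic by construction. The sketch below assumes such a construction with illustrative weights and an entire activation (the complex exponential); it is not the architecture from the work itself.

```python
# A minimal sketch, not the paper's model: a "network" composed of holomorphic
# operations on z = x + iy whose real part is therefore exactly harmonic.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal(8) + 1j * rng.standard_normal(8)   # complex weights (illustrative)
b1 = rng.standard_normal(8) + 1j * rng.standard_normal(8)
W2 = rng.standard_normal(8) + 1j * rng.standard_normal(8)

def holomorphic_net(x, y):
    z = x + 1j * y                 # embed the 2D input as a complex number
    h = np.exp(W1 * z + b1)        # exp is entire, so each unit is holomorphic in z
    f = np.sum(W2 * h)             # a complex-linear combination stays holomorphic
    return f.real                  # the real part of a holomorphic function is harmonic

# Finite-difference check that the Laplacian vanishes (up to discretisation
# and floating-point error).
eps = 1e-4
x0, y0 = 0.3, -0.7
lap = (holomorphic_net(x0 + eps, y0) + holomorphic_net(x0 - eps, y0)
       + holomorphic_net(x0, y0 + eps) + holomorphic_net(x0, y0 - eps)
       - 4 * holomorphic_net(x0, y0)) / eps**2
print(f"discrete Laplacian ≈ {lap:.3e}")
```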
By transferring knowledge from large, diverse, task-agnostic datasets, modern machine learning models can solve specific downstream tasks either zero-shot or with small task-specific datasets to a high level of performance. While this capability has been demonstrated in other fields such as computer vision, natural language processing or speech recognition, it remains to be shown in robotics, where the generalization capabilities of the models are particularly critical due to the difficulty of collecting real-world robotic data. We argue that one of the keys to the success of such general robotic models lies with open-ended task-agnostic training, combined with high-capacity architectures that can absorb all of the diverse robotic data. In this paper, we present a model class, dubbed Robotics Transformer, that exhibits promising scalable model properties. We verify our conclusions in a study of different model classes and their ability to generalize as a function of the data size, model size, and data diversity, based on a large-scale data collection on real robots performing real-world tasks. The project's website and videos can be found at robotics-transformer.github.io.
We adapt Lee et al.'s (2018) span-based entity coreference model to the task of end-to-end discourse deixis resolution in dialogue, specifically by proposing extensions to their model that exploit task-specific characteristics. The resulting model, dd-utt, achieves state-of-the-art results on the four datasets in the CODI-CRAC 2021 shared task.
Language models (LMs) now excel at many tasks such as few-shot learning, question answering, reasoning, and dialog. However, they sometimes generate unsupported or misleading content. A user cannot easily determine whether their outputs are trustworthy or not, because most LMs do not have any built-in mechanism for attribution to external evidence. To enable attribution while still preserving all the powerful advantages of recent generation models, we propose RARR (Retrofit Attribution using Research and Revision), a system that 1) automatically finds attribution for the output of any text generation model and 2) post-edits the output to fix unsupported content while preserving the original output as much as possible. When applied to the output of several state-of-the-art LMs on a diverse set of generation tasks, we find that RARR significantly improves attribution while otherwise preserving the original input to a much greater degree than previously explored edit models. Furthermore, the implementation of RARR requires only a handful of training examples, a large language model, and standard web search.
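A rough sketch of such a research-and-revise loop is shown below. Every helper (query generation, retrieval, agreement check, minimal edit) is a hypothetical stub standing in for LLM prompts and a web-search API; this is not the authors' implementation.

```python
# A minimal sketch, under the stated assumptions, of a research-and-revise loop:
# split the output into claims, retrieve evidence for each, keep claims the
# evidence supports, and minimally edit the rest while recording attributions.
def generate_claims(passage):
    # An LLM would turn the passage into verification questions; here: one per sentence.
    return [s.strip() for s in passage.split(".") if s.strip()]

def retrieve_evidence(query):
    # Placeholder for a web-search call returning (url, snippet) pairs.
    return [("https://example.org", f"snippet relevant to: {query}")]

def agrees(claim, snippet):
    # Placeholder agreement check; in practice an LLM/NLI model compares claim and evidence.
    return claim.lower() in snippet.lower()

def minimally_edit(claim, snippet):
    # Placeholder editor that would rewrite only the unsupported part of the claim.
    return f"{claim} [revised to match: {snippet}]"

def research_and_revise(passage):
    revised, attribution_report = [], []
    for claim in generate_claims(passage):
        url, snippet = retrieve_evidence(claim)[0]
        attribution_report.append(url)
        revised.append(claim if agrees(claim, snippet) else minimally_edit(claim, snippet))
    return ". ".join(revised) + ".", attribution_report

print(research_and_revise("The Eiffel Tower is in Paris. It was finished in 1999."))
```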
Partial differential equations (PDEs) are used to model a variety of dynamical systems in science and engineering. Recent advances in deep learning have given us new ways to address the curse of dimensionality and thereby solve PDEs in higher dimensions. However, deep learning methods are constrained by training time and memory. To address these shortcomings, we implement Tensor Neural Networks (TNNs), a quantum-inspired neural network architecture that leverages ideas from tensor networks to improve deep learning approaches. We demonstrate that TNNs provide significant parameter savings compared to classical dense neural networks (DNNs) while achieving the same accuracy. In addition, we show how TNNs can be trained faster than DNNs for the same accuracy. We benchmark these results by applying TNNs to solve parabolic PDEs, in particular the Black-Scholes-Barenblatt equation, which is widely used in financial pricing theory. Further examples, such as the Hamilton-Jacobi-Bellman equation, are also discussed.
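As a rough illustration of where the parameter savings of tensor-network layers come from, the sketch below replaces a 256-by-256 dense weight matrix with a tensor-train-style factorisation into two small cores; the shapes and rank are illustrative assumptions rather than the TNN architecture of the paper.

```python
# A minimal sketch, assuming a tensor-train-style factorisation of a dense layer.
import numpy as np

in_dims, out_dims, rank = (16, 16), (16, 16), 4      # a 256 -> 256 layer
# Dense layer: 256 * 256 = 65,536 weights.
# Factorised layer: two cores with 16*16*4 + 4*16*16 = 2,048 weights.
core1 = np.random.randn(in_dims[0], out_dims[0], rank) * 0.1
core2 = np.random.randn(rank, in_dims[1], out_dims[1]) * 0.1

def tt_linear(x):
    """Apply the factorised layer to a batch of 256-dimensional inputs."""
    x = x.reshape(-1, in_dims[0], in_dims[1])          # (batch, 16, 16)
    h = np.einsum('bij,iar->bajr', x, core1)           # contract the first input mode
    y = np.einsum('bajr,rjc->bac', h, core2)           # contract the second input mode
    return y.reshape(x.shape[0], -1)                   # (batch, 256)

x = np.random.randn(8, 256)
print(tt_linear(x).shape)          # (8, 256)
print(core1.size + core2.size)     # 2048 parameters vs. 65536 for a dense layer
```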
Continual learning aims to enable a single model to learn a sequence of tasks without catastrophic forgetting. Top-performing methods usually require a rehearsal buffer to store past raw examples for experience replay, which, however, limits their practical value due to privacy and memory constraints. In this work, we present a simple yet effective framework, DualPrompt, which learns a small set of parameters, called prompts, to properly instruct a pre-trained model to learn tasks arriving sequentially, without buffering past examples. DualPrompt presents a novel approach of attaching complementary prompts to the pre-trained backbone and then formulating the objective as learning task-invariant and task-specific "instructions". With extensive experimental validation, DualPrompt consistently sets state-of-the-art performance under the challenging class-incremental setting. In particular, DualPrompt outperforms recent advanced continual learning methods that use relatively large buffer sizes. We also introduce a more challenging benchmark, Split ImageNet-R, to help generalize rehearsal-free continual learning research. The source code is available at https://github.com/google-research/l2p.
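A rough sketch of the complementary-prompt idea, assuming a frozen transformer backbone, a shared task-invariant prompt, and one task-specific prompt per task prepended to the input tokens; dimensions, layer choice, and prompt lengths are illustrative, not those of DualPrompt.

```python
# A minimal sketch, under the stated assumptions: only the prompts are trainable,
# the pre-trained backbone stays frozen, and prompts are prepended to the tokens.
import torch
import torch.nn as nn

d_model, n_tasks = 64, 5
backbone = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
for p in backbone.parameters():
    p.requires_grad = False                       # frozen pre-trained backbone

g_prompt = nn.Parameter(torch.randn(1, 4, d_model))                  # task-invariant
e_prompts = nn.ParameterList(                                         # one per task
    [nn.Parameter(torch.randn(1, 4, d_model)) for _ in range(n_tasks)])

def forward(tokens, task_id):
    batch = tokens.shape[0]
    prompts = torch.cat([g_prompt, e_prompts[task_id]], dim=1).expand(batch, -1, -1)
    return backbone(torch.cat([prompts, tokens], dim=1))              # prepend prompts

tokens = torch.randn(2, 16, d_model)              # e.g. patch embeddings of an image
print(forward(tokens, task_id=0).shape)           # torch.Size([2, 24, 64])
```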
Large language models can encode a wealth of semantic knowledge about the world. Such knowledge could be extremely useful to robots aiming to act upon high-level, temporally extended instructions expressed in natural language. However, a significant weakness of language models is that they lack real-world experience, which makes it difficult to leverage them for decision making within a given embodiment. For example, asking a language model to describe how to clean up a spill might result in a reasonable narrative, but it may not be applicable to a particular agent, such as a robot, that needs to perform this task in a particular environment. We propose to provide real-world grounding by means of pretrained skills, which are used to constrain the model to propose natural language actions that are both feasible and contextually appropriate. The robot can act as the language model's "hands and eyes", while the language model supplies high-level semantic knowledge about the task. We show how low-level skills can be combined with large language models so that the language model provides high-level knowledge about the procedures for performing complex and temporally extended instructions, while value functions associated with these skills provide the grounding necessary to connect this knowledge to a particular physical environment. We evaluate our method on a number of real-world robotic tasks, where we demonstrate the need for real-world grounding and show that this approach is capable of completing long-horizon, abstract, natural language instructions on a mobile manipulator. The project's website and videos can be found at https://say-can.github.io/.
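A rough sketch of the scoring rule this describes, assuming each candidate skill is ranked by the product of a language-model score and a learned value function; the numbers below are hard-coded placeholders standing in for the LLM and the value functions.

```python
# A minimal sketch, under the stated assumptions: pick the skill maximising
# (language-model usefulness) * (value-function feasibility in the current state).
skills = ["find a sponge", "pick up the sponge", "go to the table", "wipe the spill"]

def llm_score(instruction, skill):
    # Placeholder for the LM's likelihood of the skill as the next step of the plan.
    return {"find a sponge": 0.5, "pick up the sponge": 0.2,
            "go to the table": 0.2, "wipe the spill": 0.1}[skill]

def value_function(skill, state):
    # Placeholder affordance: probability the skill can succeed from this state.
    return 0.9 if skill in state["feasible_skills"] else 0.05

def next_skill(instruction, state):
    return max(skills, key=lambda s: llm_score(instruction, s) * value_function(s, state))

state = {"feasible_skills": {"find a sponge", "go to the table"}}
print(next_skill("clean up the spilled drink", state))   # -> "find a sponge"
```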
Recently, the use of mobile health (mHealth) information services, which provide rich guidelines on improving physical activity, has grown. These guidelines stem from consideration of various personal behavioural factors that often deviate from the user's health status, including changing fitness preferences, adherence problems, and uncertainty about future fitness outcomes, all of which can lead to a decline in the quality of mHealth information services. Because of the dynamics of users' health conditions, many of these mHealth information services provide only limited fitness guidance. This paper seeks an adaptive approach, using deep reinforcement learning, to make personalized physical activity recommendations that are learned from retrospective physical activity data and can simulate realistic behavioural trajectories. We build a real-time interaction model for the mHealth information service system, based on scientific knowledge about physical activity, to evaluate exercise performance. The physical activity performance evaluation model takes adaptation and fatigue effects into account to determine the optimal exercise intensity, avoiding both under-exercising and overload. Short-term activity plans are generated using deep reinforcement learning as the individual's health status changes over time. With this approach, we can dynamically update the physical activity recommendation policy based on actually implemented behaviour. Our DRL-based recommendation policy is validated by comparison with other benchmark policies. Experimental results show that this adaptive learning algorithm can improve recommendation performance by more than 4.13%.
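A rough sketch of the kind of decision loop described above, assuming a tabular Q-learning agent over discrete exercise intensities and a toy fatigue model; the dynamics, reward, and discretisation are illustrative assumptions, not the paper's environment model.

```python
# A minimal sketch, under the stated assumptions: state = fatigue level,
# action = exercise intensity, reward = performance score penalising overload.
import numpy as np

intensities = [0, 1, 2, 3]                     # rest, light, moderate, vigorous
fatigue_levels = 5
Q = np.zeros((fatigue_levels, len(intensities)))
rng = np.random.default_rng(0)

def step(fatigue, intensity):
    reward = intensity - 2.0 * max(0, fatigue + intensity - 4)   # overload penalty
    next_fatigue = int(np.clip(fatigue + intensity - 1, 0, fatigue_levels - 1))
    return next_fatigue, reward

alpha, gamma, eps = 0.1, 0.9, 0.1
fatigue = 0
for _ in range(5000):
    a = rng.integers(len(intensities)) if rng.random() < eps else int(Q[fatigue].argmax())
    nxt, r = step(fatigue, a)
    Q[fatigue, a] += alpha * (r + gamma * Q[nxt].max() - Q[fatigue, a])
    fatigue = nxt

print("recommended intensity per fatigue level:", Q.argmax(axis=1))
```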
Our goal is to quantify whether, and how, spatio-temporal patterns in tropical cyclone (TC) satellite imagery signal upcoming rapid intensity change events. To address this question, we propose a new nonparametric test of association between a time series of images and a series of binary event labels. We ask whether there is a difference in distribution between (dependent but identically distributed) 24-hour sequences of images preceding an event versus a non-event. By rewriting the statistical test as a regression problem, we leverage neural networks to infer patterns in the structural evolution of TC convection that are indicative of impending rapid intensity change events. Dependence between nearby sequences is handled by a bootstrap procedure that estimates the marginal distribution of the label series. We prove that type I error control is guaranteed as long as the distribution of the label series is well estimated, which is made easier by the extensive historical record of binary TC event labels. We show empirical evidence that our proposed method identifies infrared imagery prototypes associated with the risk of rapid intensification, often marked by deep or deepening core convection over time. Such results provide a foundation for improved forecasts of rapid intensification.
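A rough sketch of the testing-as-prediction idea, assuming a simple classifier (standing in for the neural network) and i.i.d. resampling of labels from their estimated marginal (standing in for the paper's dependence-aware bootstrap); data and features are synthetic placeholders.

```python
# A minimal sketch, under the stated assumptions: fit a classifier to distinguish
# pre-event from pre-non-event sequences, use its score as the test statistic, and
# build the null distribution by resampling labels from their estimated marginal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 400, 10
labels = rng.binomial(1, 0.3, size=n)                     # binary event labels
X = rng.standard_normal((n, d)) + 0.4 * labels[:, None]   # stand-in sequence features

def statistic(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    return clf.score(X, y)                                # classification accuracy

observed = statistic(X, labels)

# Null: resample labels from their estimated marginal (i.i.d. here; the paper's
# bootstrap instead preserves the dependence structure of the label series).
p_hat = labels.mean()
null = [statistic(X, rng.binomial(1, p_hat, size=n)) for _ in range(200)]
p_value = (1 + sum(s >= observed for s in null)) / (1 + len(null))
print(f"observed statistic {observed:.3f}, p-value {p_value:.3f}")
```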